These aren't AI firms, they're defense contractors. We can't let them hide behind their models
From Gaza to Iran, the pattern is the same: precision weapons, chosen blindness, and dead children. There is an Israeli military strategy called the "fog procedure". First used during the second intifada, it's an unofficial rule that requires soldiers guarding military posts in conditions of low visibility to shoot bursts of gunfire into the darkness, on the theory that an invisible threat might be lurking. It's violence licensed by blindness. Shoot into the darkness and call it deterrence. With the dawn of AI warfare, that same logic of chosen blindness has been refined, systematized, and handed off to a machine.
Why physical AI is becoming manufacturing's next advantage
From simulation-driven development to real-world execution, Microsoft and NVIDIA are helping manufacturers leverage AI to cross the industrial frontier with confidence. For decades, manufacturers have pursued automation to drive efficiency, reduce costs, and stabilize operations. That approach delivered meaningful gains, but it is no longer enough. Today's manufacturing leaders face a different challenge: how to grow amid labor constraints, rising complexity, and increasing pressure to innovate faster without sacrificing safety, quality, or trust. The next phase of transformation will not be defined by isolated AI tools or individual robots, but by intelligence that can operate reliably in the physical world. This is where physical AI--intelligence that can sense, reason, and act in the real world--marks a decisive shift.
Humanoid home robots are on the market – but do we really want them?
Last year, Norwegian-US tech company 1X announced a strange new product: "the world's first consumer-ready humanoid robot designed to transform life at home". Standing 168 centimetres tall and weighing in at 30 kilograms, the US$20,000 Neo bot promises to automate common household chores such as folding laundry and loading the dishwasher. Neo has a built-in artificial intelligence (AI) system, but for tricky tasks it requires a 1X employee wearing a virtual reality helmet to remotely take over the robot. The operator can see whatever the bot does inside your house, and the process is recorded for future learning.
The greatest risk of AI in higher education isn't cheating – it's the erosion of learning itself
Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Should universities ban the tech? But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom. Universities are adopting AI across many areas of institutional life.
RWDS Big Questions: how do we balance innovation and regulation in the world of AI?
AI development is accelerating, while regulation moves more deliberately. That tension creates a core challenge: how do we maintain momentum without breaking the things that matter? The aim isn't to slow innovation unnecessarily, but to ensure progress happens at a pace that protects individuals and society. Responsible actors should not be disadvantaged -- yet safeguards are essential to maintain trust. For the latest video in our RWDS Big Questions series, our panel explores this delicate balance.
A defense official reveals how AI chatbots could be used for targeting decisions
Though the US military's big data initiative Maven has sped up the planning of strikes for years, the comments suggest that generative AI is now adding a new interpretive layer to such deliberations. The US military might use generative AI systems to rank lists of targets and make recommendations--which would be vetted by humans--about which to strike first, according to a Defense Department official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating. A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations.
What Anthropic's Clash With the Pentagon Is Really About
Who will take responsibility for the technology? The weekslong conflict between Anthropic and the Department of Defense is entering a new phase. After DOD designated the company a supply-chain risk last week, a move that effectively forbids Pentagon contractors from using its products, Anthropic filed a lawsuit against DOD this morning alleging that the government's actions were unconstitutional and ideologically motivated. Then, this afternoon, 37 employees from OpenAI and Google DeepMind--including Google's chief scientist, Jeff Dean--signed an amicus brief in support of Anthropic, in essence lending support to one of their employers' greatest business rivals (even as OpenAI itself has established a controversial new contract with DOD). For the past few weeks, Anthropic has been in heated negotiations with the Pentagon over how the U.S. military can use the firm's AI systems.
This Defense Company Made AI Agents That Blow Things Up
Scout AI is using technology borrowed from the AI industry to power lethal weapons--and recently demonstrated its explosive potential. Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate chores. The big difference is that instead of writing code, answering emails, or buying stuff online, Scout AI's agents are designed to seek and destroy things in the physical world with exploding drones. In a recent demonstration, held at an undisclosed military base in central California, Scout AI's technology was put in charge of a self-driving off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hiding in the area, and then blew it to bits using an explosive charge.